New Integrality Gap Results for the Firefighters Problem on Trees
The Firefighter problem on trees is NP-hard and admits a (1-1/e)-approximation
based on rounding the canonical LP. In this paper, we first show a matching
integrality gap of (1-1/e+\epsilon) on the canonical LP. This result relies
on a powerful combinatorial gadget that can be used to prove integrality gap
results for many problem settings. We also consider the canonical LP augmented
with simple additional constraints (as suggested by Hartke). We provide several
pieces of evidence that these constraints improve the integrality gap of the
canonical LP: (i) extreme points of the new LP are integral for some known
tractable instances, and (ii) a natural family of instances that are bad for the
canonical LP admits an improved approximation algorithm via the new LP. We
conclude by presenting a 5/6 integrality gap instance for the new LP.
Comment: 22 pages
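Not part of the paper, but the objective being relaxed by the LP can be made concrete with a tiny brute-force sketch (all names are ours): fire starts at the root, one vertex is protected per round, and the fire then spreads to every unprotected neighbour of a burning vertex; the goal is to maximize the number of vertices that never burn.

```python
from itertools import permutations

def simulate(tree, root, placements):
    """Burn from `root`; protect one vertex per round per `placements`.
    Returns the number of vertices that never burn."""
    burning, protected = {root}, set()
    for v in placements:
        if v not in burning:
            protected.add(v)
        frontier = {u for b in burning for u in tree[b]
                    if u not in burning and u not in protected}
        if not frontier:          # fire contained
            break
        burning |= frontier       # fire spreads one step
    return len(tree) - len(burning)

def best_saved(tree, root):
    """Exact optimum by trying every placement order (tiny trees only)."""
    others = [v for v in tree if v != root]
    return max(simulate(tree, root, p) for p in permutations(others))
```

On a path 0-1-2-3 burning from vertex 0, protecting vertex 1 immediately contains the fire and saves three vertices.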
Pre-Reduction Graph Products: Hardnesses of Properly Learning DFAs and Approximating EDP on DAGs
The study of graph products is a major research topic and typically concerns
the term f(G_1\times G_2), e.g., to show that f(G_1\times G_2) = f(G_1)f(G_2).
In this paper, we study graph products in a non-standard form f(R[G_1\times
G_2]), where R is a "reduction", a transformation of any graph into an instance
of an intended optimization problem. We resolve some open problems as
applications.
(1) A tight n^{1-\epsilon}-approximation hardness for the minimum
consistent deterministic finite automaton (DFA) problem, where n is the
sample size. Due to Board and Pitt [Theoretical Computer Science 1992], this
implies the hardness of properly learning DFAs assuming NP \neq RP (the
weakest possible assumption).
(2) A tight n^{1/2-\epsilon} hardness for the edge-disjoint paths (EDP)
problem on directed acyclic graphs (DAGs), where n denotes the number of
vertices.
(3) A tight hardness of packing vertex-disjoint k-cycles for large k.
(4) An alternative (and perhaps simpler) proof for the hardness of properly
learning DNF, CNF and intersections of halfspaces [Alekhnovich et al., FOCS
2004 and J. Comput. Syst. Sci. 2008].
Independent Set, Induced Matching, and Pricing: Connections and Tight (Subexponential Time) Approximation Hardnesses
We present a series of almost settled inapproximability results for three
fundamental problems. The first in our series is the subexponential-time
inapproximability of the maximum independent set problem, a question studied in
the area of parameterized complexity. The second is the hardness of
approximating the maximum induced matching problem on bounded-degree bipartite
graphs. The last in our series is the tight hardness of approximating the
k-hypergraph pricing problem, a fundamental problem arising from the area of
algorithmic game theory. In particular, assuming the Exponential Time
Hypothesis, our two main results are:
- For any r larger than some constant, any r-approximation algorithm for the
maximum independent set problem must run in at least
2^{n^{1-\epsilon}/r^{1+\epsilon}} time. This nearly matches the upper bound of
2^{n/r} (Cygan et al., 2008). It also improves some hardness results in the
domain of parameterized complexity (e.g., Escoffier et al., 2012 and Chitnis et
al., 2013).
- For any k larger than some constant, there is no polynomial time min
(k^{1-\epsilon}, n^{1/2-\epsilon})-approximation algorithm for the k-hypergraph
pricing problem, where n is the number of vertices in an input graph. This
almost matches the upper bound of min (O(k), \tilde O(\sqrt{n})) (by Balcan and
Blum, 2007 and an algorithm in this paper).
We note the interesting fact that, in contrast to the n^{1/2-\epsilon} hardness
for polynomial-time algorithms, the k-hypergraph pricing problem admits an
n^{\delta}-approximation for any \delta > 0 in quasi-polynomial time. This puts
the problem in a rare approximability class in which approximability
thresholds can be improved significantly by allowing algorithms to run in
quasi-polynomial time.
Comment: Full version of the FOCS 2013 paper
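The 2^{n/r} upper bound of Cygan et al. referenced above follows a folklore scheme; here is a minimal sketch (ours, not taken from their paper): split the vertices into r blocks, solve each induced block exactly by enumeration, and return the best block solution. Some optimal solution places at least OPT/r vertices inside one block, so this is an r-approximation in roughly r\cdot 2^{n/r} time.

```python
from itertools import combinations

def is_independent(adj, S):
    """True iff no two vertices of S are adjacent."""
    return all(v not in adj[u] for u, v in combinations(S, 2))

def mis_exact(adj, vertices):
    """Maximum independent set inside `vertices` via 2^|vertices| search."""
    for k in range(len(vertices), 0, -1):
        for S in combinations(vertices, k):
            if is_independent(adj, S):
                return set(S)
    return set()

def approx_mis(adj, r):
    """r-approximate MIS: best exact solution over r vertex blocks."""
    V = sorted(adj)
    blocks = [V[i::r] for i in range(r)]
    return max((mis_exact(adj, b) for b in blocks), key=len)
```

On the 5-cycle with r = 2, one block already contains a maximum independent set of size 2, so the approximation is in fact exact there.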
Graph Pricing Problem on Bounded Treewidth, Bounded Genus and k-partite graphs
Consider the following problem. A seller has infinite copies of products
represented by nodes in a graph. There are consumers, each of whom has a budget
and wants to buy two products; consumers are represented by weighted edges.
Given the prices of products, each consumer will buy both products she wants,
at the given prices, if she can afford to. Our objective is to help the seller
price the products to maximize her profit.
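To make the setup concrete, here is a minimal sketch (the names and the brute-force candidate price grid are our own, not from the paper): each consumer is an edge (u, v, budget) and pays prices[u] + prices[v] whenever that sum fits her budget.

```python
from itertools import product

def profit(prices, consumers):
    """Seller revenue: consumer (u, v, budget) buys both endpoints,
    paying prices[u] + prices[v], iff the sum is within her budget."""
    return sum(prices[u] + prices[v]
               for u, v, budget in consumers
               if prices[u] + prices[v] <= budget)

def best_pricing(vertices, consumers, candidates):
    """Brute force over a finite grid of candidate prices (tiny only)."""
    return max((dict(zip(vertices, ps))
                for ps in product(candidates, repeat=len(vertices))),
               key=lambda pr: profit(pr, consumers))
```

With consumers (0, 1, 4) and (1, 2, 3) no pricing can earn more than 4 + 3 = 7, and prices 1, 3, 0 achieve exactly that.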
This problem is called the {\em graph vertex pricing} ({\sf GVP}) problem and
has resisted several recent attempts at improvement despite the simplicity of
its currently best known solution. This motivates the study of the problem on
special classes of graphs. In this paper, we study it on large classes of
graphs such as graphs with bounded treewidth, bounded genus and k-partite
graphs.
We show that there exists an {\sf FPTAS} for {\sf GVP} on graphs with bounded
treewidth. This result is also extended to an {\sf FPTAS} for the more general
{\em single-minded pricing} problem. On bounded genus graphs we present a {\sf
PTAS} and show that {\sf GVP} is {\sf NP}-hard even on planar graphs.
We study the Sherali-Adams hierarchy applied to a natural Integer Program
formulation that (1+\epsilon)-approximates the optimal solution of {\sf GVP}.
The Sherali-Adams hierarchy has gained much interest recently as a possible
approach to developing new approximation algorithms. We show that, when the
input graph has bounded treewidth or bounded genus, applying a constant number
of rounds of the Sherali-Adams hierarchy makes the integrality gap of this
natural {\sf LP} arbitrarily small, thus giving a (1+\epsilon)-approximate
solution to the original {\sf GVP} instance.
On k-partite graphs, we present a constant-factor approximation algorithm.
We further improve the approximation factors for paths, cycles and graphs with
degree at most three.
Comment: Preprint of the paper to appear in the Chicago Journal of Theoretical
Computer Science
Sorting Pattern-Avoiding Permutations via 0-1 Matrices Forbidding Product Patterns
We consider the problem of comparison-sorting an n-permutation S that
avoids some k-permutation \pi. Chalermsook, Goswami, Kozma, Mehlhorn, and
Saranurak prove that when S is sorted by inserting the elements into the
GreedyFuture binary search tree, the running time is linear in the extremal
function \mathrm{Ex}(P_\pi\otimes \text{hat}, n). This is the maximum number
of 1s in an n\times n 0-1 matrix avoiding P_\pi\otimes \text{hat}, where
P_\pi is the k\times k permutation matrix of \pi, \otimes is the Kronecker
product, and \text{hat} is a fixed pattern with three 1s. The
same time bound can be achieved by sorting with Kozma and Saranurak's
SmoothHeap.
In this paper we give nearly tight upper and lower bounds on the density of
P_\pi\otimes \text{hat}-free matrices in terms of the inverse-Ackermann
function \alpha(n):
\mathrm{Ex}(P_\pi\otimes \text{hat},n) =
\left\{\begin{array}{ll} \Omega(n\cdot 2^{\alpha(n)}), & \mbox{for most
$\pi$,}\\ O(n\cdot 2^{O(k^2)+(1+o(1))\alpha(n)}), & \mbox{for all $\pi$.}
\end{array}\right.
As a consequence, sorting \pi-free sequences can be performed in
O(n\cdot 2^{(1+o(1))\alpha(n)}) time. For many corollaries of the
dynamic optimality conjecture, the best analysis uses forbidden 0-1 matrix
theory. Our analysis may be useful in analyzing other classes of access
sequences on binary search trees.
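The extremal function above is defined through pattern containment; the following brute-force sketch (ours, tiny matrices only) spells out the two ingredients: containment of a 0-1 pattern and the Kronecker product used to form P_\pi\otimes \text{hat}. The depiction of the hat pattern is a common one; the paper's exact matrix was lost in extraction.

```python
from itertools import combinations

def contains(A, P):
    """True iff some submatrix of A (rows/columns kept in order)
    has a 1 in every position where pattern P has a 1."""
    for rows in combinations(range(len(A)), len(P)):
        for cols in combinations(range(len(A[0])), len(P[0])):
            if all(A[r][c] >= P[i][j]
                   for i, r in enumerate(rows)
                   for j, c in enumerate(cols)):
                return True
    return False

def kron(A, B):
    """Kronecker product of two 0-1 matrices."""
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

# "hat" as commonly drawn: one 1 above and between two 1s on the lower row
HAT = [[0, 1, 0],
       [1, 0, 1]]
```

Ex(P, n) is then the largest number of 1s over all n x n matrices A with contains(A, P) false.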
On Guillotine Cutting Sequences
Imagine a wooden plate with a set of non-overlapping geometric objects painted on it. How many of them can a carpenter cut out using a panel saw making guillotine cuts, i.e., only moving forward through the material along a straight line until it is split into two pieces? Already fifteen years ago, Pach and Tardos investigated whether one can always cut out a constant fraction if all objects are axis-parallel rectangles. However, even for the case of axis-parallel squares this question is still open. In this paper, we answer the latter affirmatively. Our result is constructive and holds even in a more general setting where the squares have weights and the goal is to save as much weight as possible.
We further show that answering the more general question for rectangles affirmatively with only axis-parallel cuts would yield a combinatorial O(1)-approximation algorithm for the Maximum Independent Set of Rectangles problem, and would thus solve a long-standing open problem.
In practical applications, like the mentioned carpentry and many other settings, we can usually place the items that we want to cut out freely, which gives rise to the two-dimensional guillotine knapsack problem: given a collection of axis-parallel rectangles without presumed coordinates, our goal is to place as many of them as possible in a square-shaped knapsack respecting the constraint that the placed objects can be separated by a sequence of guillotine cuts. Our main result for this problem is a quasi-PTAS, assuming the input data to be quasi-polynomially bounded integers. This factor matches the best known (quasi-polynomial time) result for (non-guillotine) two-dimensional knapsack.
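Not from the paper, but the notion of a guillotine cutting sequence can be sketched with a small recursive check (ours): a cut is a full line through the current piece that slices no rectangle, and if any valid cut exists it is safe to take greedily, since a cutting sequence for the whole piece restricts to one for each side.

```python
def guillotine_separable(rects):
    """rects: pairwise non-overlapping axis-parallel (x1, y1, x2, y2).
    True iff a sequence of end-to-end cuts isolates every rectangle."""
    if len(rects) <= 1:
        return True
    # a valid cut can always be slid onto some rectangle's right/top edge
    for far, near in ((2, 0), (3, 1)):      # vertical, then horizontal cuts
        for c in {r[far] for r in rects}:
            side_a = [r for r in rects if r[far] <= c]
            side_b = [r for r in rects if r[near] >= c]
            if side_a and side_b and len(side_a) + len(side_b) == len(rects):
                return (guillotine_separable(side_a)
                        and guillotine_separable(side_b))
    return False
```

A diagonal staircase of squares is separable, whereas the classic four-rectangle pinwheel is the standard example that no first cut can split.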
From Gap-ETH to FPT-Inapproximability: Clique, Dominating Set, and More
We consider questions that arise from the intersection between the areas of
polynomial-time approximation algorithms, subexponential-time algorithms, and
fixed-parameter tractable algorithms. The questions, which have been asked
several times (e.g., [Marx08, FGMS12, DF13]), are whether there is a
non-trivial FPT-approximation algorithm for the Maximum Clique (Clique) and
Minimum Dominating Set (DomSet) problems parameterized by the size of the
optimal solution. In particular, letting OPT be the optimum and N be
the size of the input, is there an algorithm that runs in t(OPT)\cdot poly(N)
time and outputs a solution of size f(OPT), for any functions t and f that
are independent of N (for Clique, we want f(OPT) = \omega(1))?
In this paper, we show that both Clique and DomSet admit no non-trivial
FPT-approximation algorithm, i.e., there is no
o(OPT)-FPT-approximation algorithm for Clique and no
f(OPT)-FPT-approximation algorithm for DomSet, for any function f
(e.g., this holds even if f is the Ackermann function). In fact, our results
imply something even stronger: The best way to solve Clique and DomSet, even
approximately, is to essentially enumerate all possibilities. Our results hold
under the Gap Exponential Time Hypothesis (Gap-ETH) [Dinur16, MR16], which
states that no 2^{o(n)}-time algorithm can distinguish between a satisfiable
3SAT formula and one which is not even (1-\epsilon)-satisfiable for some
constant \epsilon > 0.
Besides Clique and DomSet, we also rule out non-trivial FPT-approximation for
Maximum Balanced Biclique, Maximum Subgraphs with Hereditary Properties, and
Maximum Induced Matching in bipartite graphs. Additionally, we rule out
k^{o(1)}-FPT-approximation algorithms for Densest k-Subgraph, although this
ratio does not yet match the trivial O(k)-approximation algorithm.
Comment: 43 pages. To appear in FOCS'17
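A concrete reading of "essentially enumerate all possibilities": the trivial exact algorithm below tries every vertex subset of size at most k, taking N^{O(k)} time; the result above says that, under Gap-ETH, nothing substantially faster exists even for approximating Clique (an illustrative sketch, ours).

```python
from itertools import combinations

def max_clique(adj, k):
    """Largest clique of size <= k by exhaustive N^{O(k)} enumeration:
    scan subset sizes downward and return the first clique found."""
    for size in range(min(k, len(adj)), 0, -1):
        for S in combinations(sorted(adj), size):
            if all(v in adj[u] for u, v in combinations(S, 2)):
                return set(S)
    return set()
```

On a triangle with a pendant vertex, the enumeration with k = 3 finds the triangle itself.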